Response of Beetles to Global Change: The Past Is a Clue to the Future
ago, led to the reduction in habitat and the extinction or restriction of several species of beetles. Much later, in the mid-nineteenth century, the arrival of Europeans and their cultivation practices modified the insect fauna of the American Midwest so profoundly that the event is as detectable in the fossil record as any ice age climate change. The lesson from the fossil record of insects is that with increasing human disturbance there will be more extinction of insect species.

EVIDENCE DELIMITING PAST GLOBAL CLIMATE CHANGES
Bluemle, John P., North Dakota Geological Survey, Bismarck, ND; Joseph M. Sabel, U.S. Coast Guard, Oakland, CA; and Wibjörn Karlén, Stockholm University, Stockholm, Sweden
Politicians and the media assume Earth's climate is warming as the result of human activity. Various types of evidence of previous climate changes were investigated as a means of testing the validity of assigning anthropogenic causes to this change. Data with broad geographic coverage indicative of temperature and climate were evaluated. These included records of glacial advance and retreat, sedimentologic evidence of sea level change and glacial activity, palynologic indications of species succession, dendrochronologic evidence of tree-growth response to environment, and continental ice-core parameters indicating accumulation rates as well as other climate surrogates. Also reviewed were historical sources such as explorers' journals, which document significant climate effects over time. Each type of evidence has particular strengths and limitations, among them preservation of regional versus local conditions, transport into or out of the system, age-date reliability, correlation between data types, and data (as well as human) bias. All the data indicate that the Holocene has been characterized by ten or more irregularly spaced global "little ice ages," each lasting a few centuries and separated by sometimes sudden and dramatic global warming events. It is difficult to develop precise paleothermometry, but qualitative evaluations indicate frequent, sudden, and dramatic climate changes. Changes can be rapid, swinging from warmer than today to full glacial conditions within 100 years, and the converse can also be true. All available data indicate that current climate change is no greater in rate or magnitude, and probably less in both, than many changes that have occurred in the past.

THE LATE PALEOCENE THERMAL MAXIMUM: ANCIENT GLOBAL WARMING AT MODERN RATES?
Bralower, Timothy J., Geology Department, University of North Carolina, Chapel Hill, NC; Lisa Sloan and James Zachos, Earth Science Department, University of California, Santa Cruz, CA
One of the most abrupt and dramatic ancient global warming events took place ~55 Ma, in the late Paleocene epoch. This event, known as the Late Paleocene Thermal Maximum (LPTM), involved warming of high-latitude and subtropical oceanic surface waters by up to 6°C and of deep waters by up to 8°C. Deep-water warming and consequent oxygen deficiency led to the most severe mass extinction of deep-sea faunas in the last 90 million years. By contrast, the LPTM is also associated with major speciation of planktic foraminifers and terrestrial mammals. The event corresponds to a large (3 per mil) negative carbon isotope excursion (CIE) that suggests major changes in the nature of carbon cycling. The CIE has been used to correlate the LPTM between terrestrial and marine sediments. Current estimates of the duration of the LPTM range from 50 to 200 thousand years.
The onset of the event, including the full magnitude of the CIE, is thought to span 2–10 thousand years. This initial rate of CO2 input is comparable with the anthropogenic input from fossil fuels. Possible sources of CO2 and warming mechanisms include dissociation of methane hydrates along continental margins and a major episode of effusive volcanism in the North Atlantic. For causal mechanisms to be fully tested, however, more precise estimates of the duration of the onset are required. In this talk, a state-of-the-art chronology of the LPTM was presented along with a comparison of LPTM and modern CO2 input and warming rates.

SURPRISES IN THE GREENHOUSE
Broecker, Wallace S., Lamont-Doherty Earth Observatory of Columbia University, Palisades, NY
During the last glacial period, Earth's climate underwent frequent large and abrupt global changes. This behavior appears to reflect the ability of the ocean's thermohaline circulation to assume more than one mode of operation. The record in ancient sedimentary rocks suggests that similar abrupt changes plagued the Earth at other times. The trigger mechanism for these reorganizations may have been the antiphasing of polar insolation associated with orbital cycles. Were the ongoing increase in atmospheric CO2 levels to trigger another such reorganization, it would be bad news for a world striving to feed 11–16 billion people.

FOSSIL LEAVES AS BIOSENSORS OF EOCENE PALEOATMOSPHERIC CO2
Dilcher, David L., Florida Museum of Natural History, University of Florida, Gainesville, FL; Wolfram M. Kuerschner, Henk Visscher, and Friederike Wagner, Laboratory of Paleobotany and Palynology, Utrecht University, The Netherlands
During the Cretaceous, as broad-leaved flowering plants evolved leaf forms similar to those seen in many flowering plants today, they had to develop physiological responses to changes in the supply of basic resources such as [CO2]. As the amount of [CO2] varied, the anatomy of the leaves accommodated these fluctuations, displaying physiologically determined signals that can be used to discriminate atmospheric levels of [CO2]. A decreasing number of stomata (air exchange holes in a leaf) on leaves of broad-leaved trees, as determined by the stomatal index, indicates an increase in [CO2]. The stomatal index represents the ratio of the number of stomata per unit leaf area to the number of total epidermal cells per unit leaf area, thus expressing stomatal frequency independently of variation in epidermal cell size and serving as a sensitive parameter for detecting changes in stomatal frequency. In comparing the stomatal indexes of modern leaves of the loblolly bay (Gordonia lasianthus) with those of 100-year-old leaves and of related Eocene leaves, significant differences were observed among these groups that appear to be directly related to the atmospheric [CO2] present when the leaves grew. Our preliminary results indicate that the Eocene p[CO2] was on the order of 450–500 ppmv at a time when the earth was significantly warmer than today. Analysis of stomatal indexes over time provides a useful method for quantifying past environmental changes in [CO2].
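For reference, the stomatal index used in this kind of work is computed from counts of stomata and epidermal cells made on the same leaf area; one commonly used convention (the abstract does not state the authors' exact formulation) is:

```latex
% Stomatal index (SI) in one commonly used convention:
%   SD = stomata per unit leaf area, ED = epidermal cells per unit leaf area,
%   both counted on the same field of view.
\[
  \mathrm{SI}\,(\%) \;=\; \frac{\mathrm{SD}}{\mathrm{SD} + \mathrm{ED}} \times 100
\]
```

Because SI is a ratio of counts made on the same area, leaf expansion and epidermal cell size largely cancel out, which is why it tracks [CO2] more reliably than raw stomatal density.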
A 400-MILLION-YEAR RECORD OF ATMOSPHERIC CARBON DIOXIDE DEDUCED FROM PEDOGENIC CARBONATES
Ekart, Douglas D., Energy and Geoscience Institute, University of Utah, Salt Lake City, UT and Thure E. Cerling, Department of Geology and Geophysics, University of Utah, Salt Lake City, UT
Calcium carbonate commonly accumulates in soils where mean annual precipitation is <100 cm, a condition met by a large fraction of the Earth's terrestrial surface. Pedogenic carbonates are a common component of fossil soils (paleosols). The carbon isotope composition of pedogenic carbonates meeting specific criteria can be related to ecological conditions and the concentration of carbon dioxide in the atmosphere. We have collected and analyzed pedogenic carbonates from hundreds of paleosols. These data have been combined with a large number of analyses from the literature. The combined data include paleosols from five continents, formed throughout the last 400 million years. Application of Cerling's CO2 paleobarometer to these data has significantly constrained the history of atmospheric CO2 through this time period. Results indicate that atmospheric pCO2 has fluctuated significantly throughout the Phanerozoic. High atmospheric CO2 concentrations in the Late Silurian and Devonian dropped to low concentrations in the Late Paleozoic. CO2 levels increased dramatically in the Triassic and subsequently dropped throughout the rest of the Mesozoic, reaching levels comparable with the modern condition prior to the Cretaceous–Tertiary boundary. Levels remained relatively low throughout the Cenozoic.
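The CO2 paleobarometer referred to above relates the isotopic composition of pedogenic carbonate to atmospheric CO2 through a soil CO2 diffusion–production model; a commonly quoted form of the relation (a sketch of the published model with its usual symbol definitions, not taken from this abstract) is:

```latex
% Cerling-type pedogenic-carbonate CO2 paleobarometer (commonly quoted form):
%   C_a          atmospheric CO2 concentration (ppmV)
%   S(z)         CO2 contributed by soil respiration at depth z (ppmV)
%   delta13C_s   soil CO2 (inferred from the pedogenic carbonate)
%   delta13C_r   soil-respired CO2 (approximated by coexisting organic matter)
%   delta13C_a   atmospheric CO2
\[
  C_a \;=\; S(z)\,
  \frac{\delta^{13}C_s \;-\; 1.0044\,\delta^{13}C_r \;-\; 4.4}
       {\delta^{13}C_a \;-\; \delta^{13}C_s}
\]
```

The soil-respiration term S(z) is generally the least constrained input, which is one reason the method is restricted to paleosols meeting the "specific criteria" mentioned in the abstract.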
GEOLOGICAL CONSTRAINTS ON GLOBAL CLIMATE VARIABILITY
Gerhard, Lee C., Kansas Geological Survey, Lawrence, KS
Anthropogenic forcing of global climate is a concept with great political appeal, but it generally ignores basic concepts of science, particularly geological knowledge. Much of the current popular debate focuses on decadal variation in global temperature and ignores the natural variability inherent in geological records ranging in scale from centuries to eras. Discussion of geological constraints requires an understanding that geological science views Earth processes as being in disequilibrium, that geology is a temporal science, and that the energy budget of the Earth is controlled by radiogenic and solar inputs creating a single dynamic Earth system. One illustration of the geological constraints on global climate is the congruence of widespread glacial episodes (icehouses) and warm periods (greenhouses) with continental plate configurations. During icehouse events (Late Precambrian, Carboniferous, Neogene), continents are arranged so as to disrupt equatorial ocean currents, distributing heat unevenly and providing polar moisture to sustain large-scale glaciers. During greenhouse events, earth-circling equatorial currents are presumed from the lack of physical barriers. The conclusion is that the tectonic distribution of topography and placement of continents control the geometry of ocean currents, which in turn determine Earth's climate.

TWO MILLENNIA OF EL NIÑO EVENTS POTENTIALLY ARCHIVED IN SCLEROSPONGES
Thayer, Charles W. and Gary Hughes, Earth and Environmental Science, University of Pennsylvania, Philadelphia, PA; Kyger C. Lohmann, Department of Geological Science, University of Michigan, Ann Arbor, MI
Sclerosponges have great potential as temperature (T) recorders; they precipitate carbon and oxygen isotopes in apparent equilibrium with seawater. These animals lack photosynthetic symbionts, which simplifies interpretation of their isotopic record. It also allows them to live at subphotic depths, recording T below the thermocline and as deep as the carbonate compensation depth. The coralline sponge Acanthocaetetes wellsi is widespread in caves in the western tropical Pacific. The West Pacific warm pool accumulates here before moving eastward to the Americas during El Niño events. A. wellsi has distinct growth bands averaging ~1 mm/yr. Skeletal δ13C and δ18O show millimeter-scale cyclicity, apparently due to annual T variation, and large-amplitude swings that occur every four to seven cycles, likely indicating El Niño events. Because individual sponges live for several centuries, they can provide high-resolution records of pre- and post-industrial El Niños. Additionally, the caves contain both live and dead A. wellsi. Cross-correlation of successively older specimens should yield a 1000–2000-year record. The area, depth, and T distribution of prior warm pools will be defined. The velocity of their eastward movement may be determinable from East Pacific A. wellsi and American Mytilus. From these data, estimates of heat transfer rates can be derived, allowing determination of past El Niño intensities. By correlating past intensities with known impacts, refined prediction of El Niño effects will be possible.

STABLE ISOTOPES AND THEIR RELATIONSHIP TO TEMPERATURE AS RECORDED IN LOW-LATITUDE ICE CORES
Thompson, Lonnie G., Department of Geological Sciences and Byrd Polar Research Center, The Ohio State University, Columbus, OH
The potential of stable isotope ratios (18O/16O and 2H/1H) of water from mid- to low-latitude glaciers as a modern tool of paleoclimate reconstruction is reviewed. To interpret the ice core isotopic records quantitatively, the response of the isotopic composition of precipitation to long-term fluctuations of key climatic parameters (temperature, precipitation amount, relative humidity) over the given area should be known. Furthermore, it is important to establish the transfer functions relating the climate-induced changes in the isotopic composition of precipitation to the isotope record preserved in the glacier. This paper presented long-term perspectives on isotopic composition variations spanning the last 30,000 years in mid- to low-latitude ice cores. Also presented were ongoing calibration studies in Tibet, China, and Sajama, Bolivia, where the oxygen isotopic ratios (δ18O) of precipitation samples collected over several years at meteorological stations have been analyzed to investigate the relationship between δ18O and contemporaneous air temperature. The isotopic composition of precipitation should be viewed not only as a powerful proxy climatic indicator but also as an additional parameter for understanding climate-induced changes in the water cycle, on both regional and global scales.
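The isotope values discussed in the two preceding abstracts use the standard per-mil delta notation; for oxygen in waters (with VSMOW the usual reference standard):

```latex
% Standard delta notation for stable isotope ratios, reported in per mil:
%   R = 18O/16O (or 2H/1H) in the sample and in the reference standard
%   (VSMOW for waters).
\[
  \delta^{18}\mathrm{O} \;=\;
  \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right) \times 1000
\]
```

The site-specific transfer functions Thompson describes are then typically fitted as simple linear relations between the δ18O of precipitation and air temperature at the calibration stations.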
DIVISION OF ENVIRONMENTAL GEOSCIENCES: ENVIRONMENTAL CONSIDERATIONS IN EXPLORATION, PRODUCTION, AND THE AFTERMATH
Presiding: L. Bruce and S. Halasz

QUANTITATIVE RISK ANALYSIS OF NATURAL AND ENVIRONMENTAL PROCESSES IN THE URBAN SETTING
Browne, Carolyn, Geologic-Environmental Management Systems, Tulsa, OK
As the Millennium draws closer, media focus is fastened on natural disasters such as flooding, storms—both tornados and hurricanes—and droughts. Not everything can be blamed on El Niño or La Niña. Geology plays a major role in urban environmental impact; for example, the underlying stratigraphy is largely ignored by most developers as urban sprawl continues and the population escalates. When large three- and four-story homes are built atop friable layers of soil and ancient stream beds or shorelines, with hillside slopes nearing the angle of repose, the inevitable happens when torrential rains hit—rains that appear heavier than normal after a drought, some droughts nearing 100-year records. Such homes tend to slide down hillsides, or so much water cascades down driveways acting as funnels that the city's paved roadway erodes within a year or two of placement. As urban development continues to place stresses and strains on natural and human-made resources, quantitative risk analysis of natural and environmental processes may be a geological tool of value to public administrators and other urban planners in allocating limited public financial resources to provide citizens with the desired type and quality of public services such as water, sewers, and paved streets.

ENVIRONMENTAL PROTECTION DURING EXPLORATION AND EXPLOITATION OF OIL AND GAS FIELDS
Gildeeva, Irina, All Russia Petroleum Research Exploration Institute (VNIGRI), St. Petersburg, Russia
Estimates of oil pollution show that every year the surface of the globe is polluted by 30 million tons of oil, equivalent to the loss of one large oil field. Over the last 2–3 decades, annual oil losses in Russia alone are estimated to average 12 million tons. In recent years in Russia, up to 40,000 failures have occurred at field pipelines, of which at least 20 are significantly large. In the Komi Republic, the area of pastures damaged as a result of oil production totals 17,200 hectares; in Western Siberia, up to 12.5% of all pastures have likewise been damaged as a consequence of oil and gas field development. The author proposes to subdivide all known oil and gas field types into five groups according to the degree of potential danger they pose during field exploitation. This classification is the basis for constructing new environmental maps. These maps suggest that there is a potential for environmental damage from hydrocarbon field exploitation in the Timan-Pechora and Western Siberia provinces. It is therefore necessary to create a defense system against oil pollution at all stages of oil and gas field development. This system must include environmental audit monitoring, prevention of environmental pollution, and rehabilitation of soil and surface water. The report characterizes pollution-prevention measures, including engineering-technical, legal, and preventive measures, the last illustrated by the technology for producing and refining high-viscosity, sulphurous, metal-bearing oils. One effective biological cleanup method, developed at VNIGRI and based on the application of the NAPHTOX biopreparation, is described. Recommendations for predicting and combating accidental oil spills are also discussed.

REALISTIC EXPOSURE SCENARIOS: THE KEY TO SAVING TIME AND MONEY ON RISK-BASED CORRECTIVE ACTIONS
Hippensteel, David L., U.S. Dept. of Energy–Nevada Operations Office, North Las Vegas, NV
The primary objective of risk-based corrective action is to establish a direct relationship between the extent of a corrective action and the level of risk (potential harm from contaminant exposure) associated with no corrective action. In reaching this objective, one is attempting to ensure that remediation resources are expended efficiently to reduce the contamination that poses the most risk. To determine whether corrective action is necessary, a risk assessment is performed. The goal of risk assessment is to calculate, as closely to reality as possible, the potential for harm to a given receptor from exposure to contamination. The results of risk assessment are often accompanied by uncertainty in almost every parameter used to produce a risk estimate, and this uncertainty varies with the exposure scenarios that risk assessors must consider during their work. Controlling uncertainty is critical to achieving the primary objective of risk-based corrective action, that is, not wasting remediation resources on minor risks. This can be accomplished only by developing realistic exposure scenarios based on established facts or defensible trends supported by accepted evidence. More often than not, exposure scenarios used in risk assessments are "presumed" or prescribed by conservative regulations. The use of presumed or prescribed exposure scenarios defeats the primary objective of risk-based corrective actions by forcing remediation to protect improbable receptors.
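As a generic illustration of the calculation that sits behind such risk estimates (a simplified screening-level sketch with hypothetical parameter values, not the author's site-specific method), a drinking-water exposure dose and hazard quotient for a noncarcinogen can be computed as follows:

```python
# Simplified, screening-level exposure and hazard-quotient calculation for a
# noncarcinogenic contaminant in drinking water. All parameter values below are
# hypothetical placeholders; real assessments use site- and receptor-specific data.

def chronic_daily_intake(conc_mg_per_L, intake_L_per_day, exp_freq_days_per_yr,
                         exp_duration_yr, body_weight_kg, avg_time_days):
    """Chronic daily intake (mg per kg body weight per day)."""
    return (conc_mg_per_L * intake_L_per_day * exp_freq_days_per_yr *
            exp_duration_yr) / (body_weight_kg * avg_time_days)

def hazard_quotient(cdi_mg_per_kg_day, reference_dose_mg_per_kg_day):
    """Hazard quotient: values above 1 flag a potential noncancer concern."""
    return cdi_mg_per_kg_day / reference_dose_mg_per_kg_day

if __name__ == "__main__":
    # Hypothetical residential scenario: 2 L/day of water at 0.05 mg/L,
    # 350 days/yr for 30 years, 70-kg adult, averaged over the exposure period.
    cdi = chronic_daily_intake(0.05, 2.0, 350, 30, 70.0, 30 * 365)
    hq = hazard_quotient(cdi, reference_dose_mg_per_kg_day=0.003)
    print(f"CDI = {cdi:.4g} mg/kg-day, HQ = {hq:.2f}")
```

Each presumed parameter—intake rate, exposure frequency, exposure duration—is an exposure-scenario choice, which is why conservative prescribed scenarios can force remediation aimed at improbable receptors.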
ENVIRONMENTAL COST ASSOCIATED WITH ABANDONMENT OF OIL PRODUCTION FACILITIES, OR "WHAT HAPPENS WHEN THE WELLS GO DRY"
Kent, Bob, Geomatrix Consultants, Inc., Newport Beach, CA and Mark Hemingway, Geomatrix Consultants, Inc., Austin, TX
Environmental damage associated with oil and gas production operations is typically caused by releases of crude oil, other hydrocarbon liquids, produced water, or naturally occurring radioactive materials. These releases may be associated with wells, tank batteries, pits, or the piping systems that transport oil, gas, and water within and from the lease. The damage caused by these releases may be visible, such as large salt scars from water tank overtopping, or less apparent, such as soil or groundwater contamination from a tank or pipeline leak. Oil production leases are often sold many times between the initial discovery and the time production becomes uneconomical. Environmental restoration costs may not be significant if averaged over the life of the lease. When production stops, however, the last owner of record may face large capital costs related to decommissioning production facilities, which may include landowner litigation. As oil and gas fields advance in their life cycles, purchasers should be aware of the past operating history and potential environmental liabilities and use this knowledge in structuring the lease acquisition. An environmental assessment provides the information needed to protect purchasers from excessive liability.

USING TREES AS A BARRIER TO METALS-CONTAMINATED, SALINE GROUNDWATER
Olson, Christopher, Amoco Corporation, Warrenville, IL; Frank Thomas, KMA Environmental, Texas City, TX; David Tsao, Amoco Corporation, Warrenville, IL; and Ari Ferro, Phytokinetics, North Logan, UT
Groundwater in the shallow transmissive zone at an Amoco site is highly saline and contaminated with inorganics and radionuclides that exceed the U.S. Environmental Protection Agency maximum concentration limits. Under a voluntary cleanup program agreement, additional response actions (e.g., additional containment, pump and treat) would be required if the plume migrates past the compliance monitoring boundary. A dilution study and a greenhouse feasibility study are under way to determine the suitability of installing a tree barrier strip at the site.
The installation of dense rows of deep-rooted, water-loving trees perpendicular to groundwater flow and along the leading edge of the plume may serve as added insurance against further off-site migration, with the trees essentially acting as a flow-confinement system.

A PRACTICAL OVERVIEW OF REGULATIONS GOVERNING OIL SPILLS FROM OIL AND GAS PRODUCING FACILITIES IN TEXAS
Railsback, Rick, Cura, Inc., Dallas, TX
Relevant legislation and regulations contained in and resulting from the Clean Water Act; the Oil Pollution Act of 1990; the Texas Oil Spill Prevention and Response Act; national, regional, area, and state spill contingency plans; and Texas Railroad Commission regulations are briefly reviewed. This legislation and these regulations are summarized in a checklist of seven essential requirements for operators to follow to comply with all applicable federal regulations and state regulations specific to Texas: (1) Do not spill oil into or on navigable waters or on land. (2) Immediately report to the proper governmental agencies all spills that result in a sheen of oil on navigable waters and all spills of more than five barrels of oil on land. (3) In cleaning up and mitigating spills, follow a response plan and cooperate fully with federal and state agencies. (4) Obtain insurance or other proof of financial responsibility in compliance with the provisions of the Oil Pollution Act of 1990. (5) Develop and implement a Spill Prevention Control and Countermeasures plan and, if necessary, have the plan approved by the federal government. (6) Develop and implement a response plan(s) that will satisfy requirements of the Clean Water Act, the Oil Pollution Act of 1990, and the Texas Oil Spill Prevention and Response Act, and have the plan(s) approved by the federal and/or state government. (7) Obtain an Oil Spill Prevention and Response Certificate from the Texas General Land Office.

REGULATION OF HAZARDOUS WASTE IN THE OIL FIELD: THE RAILROAD COMMISSION OF TEXAS' APPROACH
Sims, Bart C., Railroad Commission of Texas, Austin, TX
Many wastes generated in association with crude oil and natural gas exploration and production activities are exempt from regulation as hazardous waste. However, nonexempt oil and gas waste is subject to a hazardous waste determination and, if determined to be hazardous waste, is subject to standards for the management of hazardous waste. Hazardous waste management standards are established by the federal Resource Conservation and Recovery Act, Subtitle C (RCRA Subtitle C). A state may enforce these standards through a hazardous waste program authorized by the U.S. Environmental Protection Agency (EPA), or EPA may retain RCRA Subtitle C authority in a state. The Railroad Commission of Texas enforces standards equivalent to RCRA Subtitle C through Statewide Rule 98, Standards for Management of Hazardous Oil and Gas Waste. The Railroad Commission of Texas' hazardous oil and gas waste program has not yet been authorized by EPA; therefore, the Railroad Commission of Texas and EPA share parallel authority over hazardous oil and gas waste in Texas. Statewide Rule 98 is structured to address the application of federal hazardous waste regulation to the unique circumstances of oil and gas operations. This paper provides an overview of the regulatory process for hazardous oil and gas waste in Texas, including the application of important exemptions and exclusions and the most common applicable management standards.
PETROLEUM HYDROCARBON FINGERPRINTING QUANTITATIVE INTERPRETATION: DEVELOPMENT AND CASE STUDY FOR USE IN ENVIRONMENTAL FORENSIC INVESTIGATIONS
Wigger, John W., Environmental Liability Management, Inc., Tulsa, OK; Dennis D. Beckmann, Amoco Corporation, Tulsa, OK; Bruce E. Torkelson, Torkelson Geochemistry, Inc., Tulsa, OK; and Atul X. Narang, Amoco Corporation, Naperville, IL
Hydrocarbon characterization (fingerprinting) is a technique that uses gas chromatograms to identify petroleum hydrocarbons as to type of product based on boiling range and other definitive characteristics. Identifying and comparing samples are not straightforward: the composition of a single product type can vary, the composition of samples can change after release into the environment (weathering), and multiple releases can form complex mixtures. Hydrocarbon characterization is typically done by visual examination and comparison of chromatograms, and the outcome is dependent on the expertise and experience of the interpreter(s). This paper reports on work to establish a more quantitative and less subjective process. First, a database was created of >60 known hydrocarbon samples representing streams such as gasoline, kerosene, naphtha, reformate, jet fuel, diesel, fuel oil, hydraulic oil, lubricating oil, crude oil, and other refinery intermediates. Second, a statistical correlation algorithm was developed to evaluate and compare chromatographic characteristics numerically. The techniques were used effectively in a case study involving an investigation of released hydrocarbon products at a refinery process unit, where they were instrumental in helping differentiate multiple sources and characterize the subsurface extent of the hydrocarbons.

APPLICATIONS OF FORENSIC CHEMISTRY FOR PETROLEUM CASES
Zemo, Dawn A., Geomatrix Consultants, San Francisco, CA
Forensic chemistry is useful for petroleum hydrocarbon investigations or litigation for three primary reasons: (1) petroleum products are chemically complex and can be highly variable in composition within certain performance-based ranges; (2) routine U.S. Environmental Protection Agency analytical methods only generalize the nature of petroleum products and reflect little of the chemical detail needed for forensic purposes; and (3) crude oils and products weather in the environment and change in chemical composition over time. Forensic chemistry is frequently used to answer questions about the identification or age of petroleum in the subsurface. This presentation provided examples of multiple applications of forensic chemistry, including gas chromatography pattern-matching for product identification, discriminating between weathered fuel oils based on families of aromatic hydrocarbons, determining whether polynuclear aromatic hydrocarbons are of petroleum or combustion origin, using key discrete constituent analysis (e.g., PIANO) to distinguish between products of similar type or boiling range, and age-dating products using key additives. The best forensic interpretations rely on multiple lines of evidence, must incorporate the effects of weathering and of changing refinery and transportation practices, and must avoid the pitfall of confusing weathering with age.
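The numerical chromatogram comparison described in the fingerprinting abstract above can be set up in many ways; the sketch below (an illustrative approach with hypothetical peak-area vectors, not the proprietary correlation algorithm the authors developed) normalizes peak areas and scores sample pairs with a Pearson correlation coefficient:

```python
# Illustrative numerical comparison of gas chromatograms: each sample is reduced
# to a vector of peak areas binned on a common retention-time grid, normalized,
# and compared with a Pearson correlation coefficient. Generic sketch only.
import numpy as np

def normalize(peak_areas):
    """Scale a vector of peak areas so it sums to 1 (removes dilution effects)."""
    areas = np.asarray(peak_areas, dtype=float)
    return areas / areas.sum()

def similarity(sample_a, sample_b):
    """Pearson correlation between two normalized chromatogram vectors."""
    return float(np.corrcoef(normalize(sample_a), normalize(sample_b))[0, 1])

if __name__ == "__main__":
    # Hypothetical peak-area vectors on the same retention-time grid.
    unknown = [0.2, 1.5, 3.1, 4.8, 2.2, 0.9, 0.3]
    diesel_reference = [0.1, 1.4, 3.0, 5.0, 2.4, 1.0, 0.2]
    gasoline_reference = [4.0, 3.5, 1.2, 0.6, 0.2, 0.1, 0.0]
    print("vs diesel:  ", round(similarity(unknown, diesel_reference), 3))
    print("vs gasoline:", round(similarity(unknown, gasoline_reference), 3))
```

In practice, weathering preferentially strips the light end of a chromatogram, so comparisons of this kind are usually weighted toward, or restricted to, the less volatile ranges.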
DIVISION OF ENVIRONMENTAL GEOSCIENCES/ENERGY MINERALS DIVISION: CO2 SEQUESTRATION, ENVIRONMENTAL MANAGEMENT SYSTEMS, AND OTHER ENVIRONMENTAL TOPICS
Presiding: M. M. Lee and W. P. Wilbert

GEOLOGIC DISPOSAL OF CARBON DIOXIDE EMITTED BY THE UPSTREAM ENERGY INDUSTRY: THE POTENTIAL FOR THE ALBERTA BASIN
Bachu, Stefan, Alberta Energy and Utilities Board, Edmonton, AB, Canada
Carbon dioxide is a greenhouse gas that is believed to cause global warming and climate change. To mitigate these effects, reduction of CO2 emissions in the short to long term can be achieved by a combination of actions such as improving energy efficiency, CO2 utilization, and CO2 sequestration in biomass, oceans, and geological media. Sedimentary basins are naturally associated with fossil energy resources, whose exploitation leads to CO2 production and emission to the atmosphere. For landlocked regions such as Alberta, sequestration of CO2 in geological media is probably the only viable solution for reducing CO2 emissions. Basically, there are five ways to sequester CO2 in sedimentary basins: use in enhanced oil recovery, storage in depleted oil and gas reservoirs, storage in salt caverns, replacement of methane in coal beds by CO2 injection, and hydrodynamic entrapment and mineral immobilization in deep saline aquifers. Successful CO2 sequestration depends on basin tectonics, hydrocarbon potential and maturity, and the hydrodynamic regime of formation waters. The Alberta basin is one of the few basins in the world that meet all the criteria and have all the options for CO2 sequestration in geological media. It has extensive, thick salt beds; abundant oil, gas, and coal, and huge tar sand resources; it is located on a tectonically stable Precambrian platform; the hydrodynamic regime of formation waters is extremely favorable; and it already has in place the necessary technology and infrastructure for deep CO2 injection. CO2 is already used in a few enhanced oil recovery operations and is injected as acid gas (CO2–H2S) into several depleted reservoirs and deep saline aquifers.

CARBON DIOXIDE SEQUESTRATION POTENTIAL IN COAL DEPOSITS
Byrer, Charles W. and Hugh D. Guthrie, U.S. Department of Energy–FETC, Morgantown, WV
The concept of using gassy, unmineable coalbeds for carbon dioxide (CO2) storage while concurrently initiating and enhancing coalbed methane production may be a viable near-term system for industry consideration. Coal is our most abundant and cheapest fossil fuel resource, and it has played a vital role in the stability and growth of the U.S. economy. This energy source, however, is also responsible for large CO2 emissions from the burning of coal in power plants. In the near future, coal may also have a role in solving environmental concerns over increasing greenhouse gas (CO2) emissions throughout the world. Coal resources may be an acceptable "geological sink" for storing CO2 emissions in amenable unmineable coalbeds while significantly increasing the production of natural gas (CH4) from gassy coalbeds. Proprietary industry research has shown that the recovery of coalbed methane can be enhanced by the injection of CO2, which raises the potential of targeting unmineable coals near fossil-fueled power plants for storing stack-gas CO2. Preliminary technical and economic assessments of this concept appear to merit further research leading to pilot demonstrations in selected regions of the United States.
The benefits of considering and using unmineable coalbeds in a CO2–CH4 cycle system concept include the following: (1) CO2 is captured from power plant flue gas, pressurized, and transported to injection wells completed in deep unmineable coals; (2) coals near existing power plants have enormous capacity to store CO2 while enhancing CH4 production; (3) coal reserves underlie many U.S. power plants, with as much as 90% unmineable; and (4) injection of CO2 into unmineable gassy coals displaces one molecule of sorbed CH4 while two or more molecules of CO2 are sequestered on the coal surface.

EXPLORING FOR OPTIMAL GEOLOGICAL ENVIRONMENTS FOR CARBON DIOXIDE DISPOSAL IN SALINE AQUIFERS IN THE UNITED STATES
Hovorka, Susan D. and Alan R. Dutton, Bureau of Economic Geology, The University of Texas at Austin, Austin, TX
Saline aquifers have been widely recognized as having high potential for very long term (geologic time scale) sequestration of greenhouse gases, particularly CO2. The same properties that make saline aquifers desirable for sequestration—isolation from the surface and minimal use as a resource—also mean that they are typically poorly characterized. The significant variables affecting the usefulness of an aquifer for CO2 sequestration include porosity, permeability, compartmentalization, aquifer depth, pressure, temperature, thickness, water chemistry, rock mineralogy, and aquifer flow rate. Reservoir characterization and geologic play approaches are used to extend our knowledge from well-known areas (saline aquifers closely associated with hydrocarbon production) to poorly known areas (potentially large-volume, unproductive saline aquifer targets for sequestration) by applying conceptual geologic and hydrologic models. Although reservoir characterization and play approaches are standard techniques for hydrocarbon exploration and development, they require adaptation for use in exploring for optimal hydrogeologic settings for CO2 injection in various geologic environments.

LAND SUBSIDENCE ALONG THE TEXAS GULF COAST DUE TO OIL AND GAS WITHDRAWAL
Khorzad, Kaveh, Department of Geological Sciences, The University of Texas at Austin, Austin, TX
Land subsidence caused by groundwater withdrawal in the Houston–Galveston region is a well-documented phenomenon; subsidence of up to 3 m has been calculated in the region since 1905. Hydrocarbon withdrawal is also a plausible cause of subsidence where groundwater withdrawal has diminished and significant petroleum production has occurred for >95 years. Sixteen fields were investigated by acquiring reservoir depressurization data near borehole extensometers set up by the Houston–Galveston Coastal Subsidence District. All reservoirs were found to be well below hydrostatic pressure; a few of them were underpressured even before production began. Four oil and gas fields (the Mykawa, Satsuma, Dyersdale, and South Gillock) and three production zones (the Miocene, Frio, and Yegua) were used in a reservoir model and a boundary clay reservoir model to calculate subsidence. Subsidence under these fields is predicted to be as high as 0.44 m over a 19-year period at the Satsuma field and as low as 0.02 m over a 22-year period at the Dyersdale field. The implications of this study are that (1) hydrocarbon production, although not the major contributor to most land surface subsidence in this area, does play a role; and (2) depressurization, and subsequently subsidence, from oil and gas fields may be regional and connected with other fields, as inferred from the fact that some fields were already underpressured before production began.
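For context, reservoir-compaction estimates of the kind such subsidence models build on start from a first-order uniaxial relation (a generic sketch, not the specific reservoir and boundary-clay models used in the study):

```latex
% First-order uniaxial compaction of a depleting reservoir interval:
%   Delta h  = compaction, c_m = uniaxial compaction coefficient (1/Pa),
%   h        = reservoir thickness, Delta p = pore-pressure decline
\[
  \Delta h \;=\; c_m \, h \, \Delta p
\]
% Surface subsidence is then some fraction of the reservoir compaction that
% depends on reservoir depth and lateral extent (e.g., Geertsma-type solutions).
```

The field-to-field differences quoted above largely come down to differences in pressure decline, interval thickness, and the compressibility of the depleting intervals and bounding clays.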
GUIDANCE FOR A FULLY INTEGRATED HEALTH, SAFETY AND ENVIRONMENT MANAGEMENT SYSTEM
Knode, Thomas L. and Steve Abernathy, Halliburton Energy Services, Houston, TX
One of the keys to implementing structural controls that guarantee continual improvement is a comprehensive management system. Health, Safety and Environmental (HSE) management systems have historically been separate from the mainstay processes of a company. This separation may hinder full implementation of the system because operations personnel do not consider HSE to be integral to their function. The Halliburton Management System (HMS) is an integrated management system that provides a structure covering HSE and quality within the framework of each activity. Processes are mapped in HMS, and feedback is captured with the Correction Prevention Improvement system. In practice, HMS represents five key activities: the purpose and vision of the company, a formal system for the feedback of performance measures, customer and employee satisfaction, planning activities, and a system for making improvements to the system. HMS is designed to focus on performance rather than compliance. By focusing on the process as a whole, the purpose, as defined in the mission statement, remains within sight. Based on the strategy of the company, plans are developed and implemented to ensure the proper resources are in place, including development of personnel, purchase of capital equipment and inventories, and HSE elements. Because HMS documents this process, it provides a guide that helps eliminate inefficiencies. This planning also helps integrate HSE management up front through documented risk assessment and control. The system thereby meets ISO requirements.

GROUTING MONITOR WELLS—IT'S ALL IN THE MIX
Mathewson, Christopher C. and Lloyd E. Morris, Department of Geology and Geophysics, Texas A&M University, College Station, TX
Monitor wells are frequently sealed using a cement–bentonite grout mixture because it is believed that the addition of bentonite (1) reduces shrinkage of the cement, (2) increases cement plasticity, (3) reduces curing temperatures, and (4) reduces grout weight. The design of a compound grout appears to be simple: add a fixed percentage of premium bentonite, by dry weight, per sack of Portland cement and increase the volume of mix water from 5.2 gal/sack by 1.3 gal for each 2% of bentonite added. This formula, however, is only valid if the bentonite is dry-blended with the cement before water is added. In most environmental applications, cement–bentonite mixing is performed at the job site. The driller has two options: (1) mix the cement with water and then add the dry bentonite, or (2) mix the bentonite with water and then add the cement. Both of these lead to problems; in the first case the bentonite does not fully blend with the grout, and in the second case the bentonite consumes all of the available water. The solution is to add more water. An 8% bentonite–cement compound grout can easily be mixed and pumped if the bentonite and cement are first dry-blended, a problem not easily solved on an environmental drill site. If the cement is hydrated first, an additional 10 gal of water must be added; if the bentonite is hydrated first, at least 20 gal of water must be added to the blend. When expanded high-yield bentonite is used, >30 gal of additional water must be added. These high-water-content environmental grouts have very low density and very low strength and are highly permeable. A strong quality assurance/quality control program for cement–bentonite grout specifications must therefore be established.
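The mix-water arithmetic described above can be collected into a small helper (a sketch that simply encodes the per-sack quantities quoted in the abstract, assuming a standard 94-lb sack of cement; actual grout design should follow the applicable specifications):

```python
# Mix-water estimate per sack of Portland cement (standard 94-lb sack assumed)
# for a cement-bentonite grout, encoding the quantities quoted in the abstract:
# 5.2 gal/sack base plus 1.3 gal per 2% bentonite for a dry-blended design,
# plus extra water when the components are hydrated separately on site.
def mix_water_gal_per_sack(bentonite_pct, mixing_order="dry_blend",
                           high_yield_bentonite=False):
    water = 5.2 + 1.3 * (bentonite_pct / 2.0)   # dry-blended design water
    if mixing_order == "cement_first":          # cement hydrated, bentonite added dry
        water += 10.0
    elif mixing_order == "bentonite_first":     # bentonite hydrated, cement added
        water += 20.0
    if high_yield_bentonite:                    # expanded high-yield product:
        water += 30.0                           # abstract gives ">30 gal" (lower bound)
    return water

# Example: 8% bentonite grout mixed cement-first on an environmental drill site.
print(mix_water_gal_per_sack(8, mixing_order="cement_first"))  # 5.2 + 5.2 + 10 = 20.4
```

The trade-off the abstract warns about follows directly: the extra water that makes field mixing workable is also what lowers the density and strength of the set grout and raises its permeability, hence the call for a QA/QC program on grout specifications.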
BEYOND COMPLIANCE: ENVIRONMENTAL MANAGEMENT STRATEGIES FOR THE NEXT MILLENNIUM
Sabel, Joseph M., Oakland, CA
With the growth of stringent environmental laws in the 1970s, corporate strategies were characterized by ignorance, denial, and finally, forced acceptance. Regulations expanded faster than corporate culture could adjust. Spurred by public activism, it became essential for politicians to "do something" about the environment. And they did, with a vengeance. By the late 1980s, nearly every company had an environmental program, and the stand-alone environmental department was ubiquitous. Expensive, rigid, command-and-control management was established to reduce liability exposure. As we enter the twenty-first century, this is simply not good enough. These management structures are significant cost centers. They are unable to effect continuous improvement, and they place their organizations at a competitive disadvantage in the global marketplace. To reduce costs, gain efficiency, and continue to reduce exposure, corporations must adjust their cultures. Environmental compliance must become integrated vertically through the entire structure, from CEO to janitor. These changes will not occur so that the marketing department can promote "green" products. They will not happen out of altruism. Nor will they occur because they are required, although all of these are true. Businesses will do these things simply because of the positive impact on the bottom line. Each and every change will have to pass a strict cost–benefit analysis. The result will be faster, better, and cheaper operation. The best are already there.

MN OXIDE CONCENTRATION AS EVIDENCE OF A PATHWAY FOR INFILTRATION OF CRUDE OIL INTO A SHALLOW AQUIFER, WEST TEXAS
Smyth, Rebecca C., The University of Texas at Austin, Bureau of Economic Geology, Austin, TX
In November 1991, landowners near Abilene, Texas, found crude oil in their water well. Subsequent drilling (four cores and 30 borings) defined a plume of crude oil (~300 bbl) floating on shallow, perched groundwater. Data suggest that the oil came from a near-surface leak associated with oil-production activities. Crude oil is present in a thin (0.5 ft), silty sand layer 17.7–19 ft below the surface. Because of water level fluctuation, traces of oil also occur along fractures as deep as 35 ft in two cores collected within the crude-oil plume. The presence of manganese (Mn) oxide coatings along fracture surfaces might prove to be a record of the path of the oil as it infiltrated the subsurface. Mn oxide minerals are concentrated along fracture surfaces to depths of 20 ft in two cores located nearest the suspected crude-oil source. Changes in redox conditions and increased microbial activity associated with the crude oil probably caused dissolution, followed by reprecipitation and concentration of Mn oxides.
Other effects of crude-oil degradation include high unsaturated-zone methane concentrations in a halo around the oil plume. Methane was measured in boreholes at concentrations mainly between 5 and 50%, but locally as high as 98%, at depths of 8–10 ft. The methane is most likely a result of both volatilization and biodegradation of the crude oil. Coincident with the methane plume are zones of high carbon dioxide (as much as 10%) and low oxygen (as little as 1.9%) content.

DIVISION OF PROFESSIONAL AFFAIRS/ENERGY MINERALS DIVISION/DIVISION OF ENVIRONMENTAL GEOSCIENCES: DISTRIBUTED POWER IN THE OIL AND GAS PATCH
Presiding: J. B. Platt and J. M. Fay

DISTRIBUTED GENERATION USING STRANDED GAS IN A COMMODITY ELECTRICAL MARKET: ECONOMIC OPPORTUNITIES, TECHNICAL AND OPERATIONAL CONSTRAINTS
Cousino, Dennis, Benham Holway Power Group, Tulsa, OK
The new structure of the electrical power industry has dramatically changed the opportunities available to independent power producers. Gone are the days when the developer and the utility argued, sometimes for years, at a utility commission trying to establish the rates for energy and capacity from a project. Open-access legislation and advances in metering technology and information systems have revolutionized the production, sale, transportation, and purchase of electricity. A commodity market has developed in which prices are set by supply and demand, and a producer's profit is determined by the true cost of production. Because fuel is the single largest component of delivered power cost, innovative suppliers will be exploring ways to use energy that is priced below the traditional, commercially available sources. Gas that is stranded—unable to be sold through conventional means—has the potential to become a valuable component of the energy supply mix if a few basic principles are followed. Critical relationships between gas price, generation cost, and power market price are described, as well as relationships between key stakeholders: gas producer, independent power producer, interconnected power utility, and power marketer. Drawing on experience gained in field operation of a stranded gas program, the talk addresses financing mechanisms, contractual mechanisms, and design and operation choices for generation equipment to facilitate transactions.

ON-SITE ELECTRIC GENERATION OPPORTUNITIES FOR OIL AND GAS PRODUCERS
Mantey, Vern, Mercury Electric Corporation, Calgary, AB, Canada
Deregulation of the electric industry is creating opportunities for oil and gas producers to take more control of their energy costs. On-site electric generation using small (<100 kW) turbine generators provides an alternative to utility connection. The new generation of recuperated mini-turbines has high electric efficiency, medium power density, and inherently high reliability due to multiple-unit configurations. SCADA compatibility and minimal on-site service requirements minimize operation and maintenance costs, and existing field staff are sufficient for normal day-to-day operation. Small turbines can use flare gas or other low-pressure, slightly "off-spec" gas sources as fuel. Other benefits include less downtime due to electric interruptions and reduction of greenhouse gas emissions when using flare gas as fuel. Waste heat produced is not usually economic to recover in existing facilities but should be considered in new installations. There may be cases in which selling energy in the form of electricity rather than gas is a viable alternative. These situations are very case specific and will depend on the jurisdiction, off-site electric prices, and proximity to gas transmission infrastructure. Mini-turbine technology offers opportunities that did not exist previously, but realizing them will require creativity and effort.
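A rough sense of the gas-price/generation-cost relationship that both of the preceding abstracts turn on can be had from a screening-level busbar-cost estimate (hypothetical numbers throughout, not the authors' project economics):

```python
# Screening-level cost of on-site generation ($/MWh) from fuel price and heat
# rate, for comparison against a delivered utility rate or market power price.
# All numbers below are hypothetical placeholders.
def generation_cost_per_mwh(heat_rate_btu_per_kwh, gas_price_per_mmbtu,
                            variable_om_per_mwh=5.0,
                            capital_recovery_per_kw_yr=80.0,
                            capacity_factor=0.9):
    fuel = heat_rate_btu_per_kwh * gas_price_per_mmbtu / 1000.0   # $/MWh of fuel
    fixed = capital_recovery_per_kw_yr * 1000.0 / (capacity_factor * 8760.0)
    return fuel + variable_om_per_mwh + fixed

# Example: 12,000 Btu/kWh mini-turbine burning stranded gas valued at $1/MMBtu.
print(round(generation_cost_per_mwh(12_000, 1.0), 1))  # roughly $27/MWh
```

Whether a project clears then depends on comparing this figure with the avoided utility rate or the market price of power, which is where a stranded-gas discount on fuel matters most.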
OVERVIEW OF SMALL-SCALE ELECTRIC GENERATION TECHNOLOGIES, FUEL REQUIREMENTS, AND COSTS
O'Sullivan, John, EPRI, Palo Alto, CA
Although field power generation is not new, most applications have greatly exceeded 1 MW. Recent advances in fuel cell and microturbine technology will provide economic power plants in the <300 kW size range. These developments are derivative of research and development efforts focused on vehicular propulsion: small engines for hybrid electric vehicles and fuel cells for electric vehicles are yielding technology that can be applied to stationary power. The fuels of choice for market-entry products are natural gas and propane, with some minor emphasis on methanol. At least six fuel cell developers are emphasizing power plants in the 2–5 kW range; the early market products will begin to appear in 1999. The four major microturbine developers are introducing products in the 30–75 kW range; these began to enter the market in late 1998. These small power plants should see wide application in the oil and gas industry if the claims for efficiency, reliability, and cost are substantiated by field operation. Most systems are designed for operation with pipeline-quality fuels. Available fuels in the field may have quality issues that will negatively impact both fuel cell and turbine operation. The fuel-processing subsystems for fuel cells are especially sensitive to sulfur species and to heavy hydrocarbons unless designed for their presence. The microturbines operate most effectively on fuels at 3–4 atm; otherwise, a cost and efficiency penalty is taken for a gas compressor. Developers are projecting costs in the $500–1,000/kW range. It remains to be seen whether product orders will provide a manufacturing volume that will meet these cost targets.

ON-SITE ELECTRIC GENERATION IN ENERGY AND AGRICULTURE
Priddy, Ritchie, KN Energy, Lubbock, TX
In a time of uncertainty and little construction of new power plants, distributed generation (DG) has taken on an importance few people foresaw just a few years ago. In reaction to these uncertainties and to transmission constraints, opportunities for tapping areas of trapped gas in production fields as fuel for rural DG have risen dramatically. This presentation identifies these opportunities and describes how to overcome obstacles and work with key players (including electric companies). Another opportunity for DG exists in using financial tools to arbitrage natural gas for electricity on state-owned lands. This power could be used for self-generation in the production field or could be transported to a larger grid for resale. What were once fierce competitors are becoming close allies. Electric cooperatives in the Panhandle of Texas are summer-peaking due to heavy irrigation loads, whereas natural gas companies are winter-peaking. Under some deregulation scenarios, on-peak power for these low-load-factor customers will rise 20–30% over present levels. The co-ops, being nongenerators, will likely see increased ratchet costs (from 60% to 80%) that force them to pay literally millions of dollars per year for power they never take.
Strategically located DG assets can help reduce purchased power costs during the summer by shaving peaks and can, perhaps, be baseloaded. The goal is to optimize efficiencies between the two fuels by leveling load profiles.

DIVISION OF PROFESSIONAL AFFAIRS/ENERGY MINERALS DIVISION/DIVISION OF ENVIRONMENTAL GEOSCIENCES: ELECTRIC DEREGULATION AND POWER FUNDAMENTALS
Presiding: J. B. Platt and J. M. Fay

POWER PRICING—VARIATIONS AND VOLATILITY
Miller, David A. and Fred James, Pace Resources, Inc., Fairfax, VA
Under conditions of traditional regulation, the wholesale cost of electricity is generally identified with utility lambda—the marginal cost of generation. It is the most expensive unit dispatched to meet demand in any hour that sets the system price for that hour. Therefore, the highest power-cost regions are usually those where older oil- and gas-fired plants spend a lot of time on the margin. One can develop expectations for future prices by comparing the portfolio of generating plants to forecasts of demand. But in the still-developing competitive wholesale market, electricity prices can reach levels not predictable from these fundamental factors. In the summer of 1998, hot weather, transmission congestion, and intense speculation led to prices in the Midwestern United States of over $3,000/MWh, more than 100 times typical prices. As retail electricity markets around the country restructure, this trend toward increased volatility will continue, driven by the need of generators to recover their capital and fixed costs from the competitive market instead of from ratepayers. In comparison, the natural gas market, at its most volatile, may reach peak prices of only two to four times average. The relationship between gas and electricity prices—also known as the spark spread—is therefore important to resource managers and power plant operators on both an average and a seasonal basis. Clearly, there is opportunity, but also risk, in playing both markets simultaneously.

HOW THE POWER GRID BEHAVES
Overbye, Thomas J., University of Illinois at Urbana–Champaign, Urbana, IL
The nation's electric power grid is an interconnected network of generation groups called control areas. Each control area has the responsibility to serve power to its residential, commercial, and industrial customers, even in the event of unforeseen disturbances. A control area must fulfill this responsibility in both a reliable and a cost-effective manner: reliably, so that its customers don't wind up in the dark; and inexpensively, so that it can remain competitive in an increasingly competitive deregulated marketplace. The high-voltage transmission system ties each control area to its neighbors, enabling it to buy and sell power with them; transacting power with its neighbors helps a control area fulfill its responsibilities. This presentation explains many of the fundamental issues surrounding the operation of the interconnected power system. It identifies the various components of the system, including the generators, loads, and transmission lines that make up each control area, and demonstrates how automatic generation control is employed to keep pace with changing power demand. Furthermore, the presentation emphasizes the variety of issues associated with transacting power between areas, identifying the reliability and economic issues and indicators that may shape a control area's decision to engage in a power transaction. The highly graphical and interactive PowerWorld Simulator software package is used to communicate these lessons clearly and effectively.
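The automatic generation control referred to above acts on an area control error (ACE) signal; in one conventional, simplified form (neglecting metering-error terms):

```latex
% Area control error (ACE) in a simplified conventional form:
%   NI_A, NI_S = actual and scheduled net tie-line interchange (MW)
%   F_A,  F_S  = actual and scheduled system frequency (Hz)
%   B          = area frequency bias (MW per 0.1 Hz, negative by convention)
\[
  \mathrm{ACE} \;=\; \left( NI_A - NI_S \right) \;-\; 10\,B\left( F_A - F_S \right)
\]
% AGC adjusts area generation to drive ACE toward zero, so each control area
% covers its own load changes while honoring its interchange schedules.
```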
POSSIBLE IMPACTS OF ELECTRIC RESTRUCTURING ON GAS USE FOR POWER GENERATION
Platt, Jeremy B., EPRI, Palo Alto, CA; James M. Fay, GRI, Chicago, IL; Stephen L. Thumb and A. Michael Schaal, Energy Ventures Analysis, Inc., Arlington, VA; and Frank C. Graves and Lynda S. Borucki, The Brattle Group, Cambridge, MA
Electric industry restructuring is proceeding rapidly in many states, yet impacts on the generation mix, and thus on the fuels used for power generation, are likely to be modest. The principal reason is that the power industry is highly heterogeneous. Generation units have typically been built close to load centers, taking advantage of local fuel economies, whereas transmission transfer capabilities between regions are limited. These conclusions are based on systematic study by EPRI and GRI of regional generation costs and transmission links. The objective has been to ascertain "big picture" effects, such as whether coal may displace more costly natural gas generation. Still, substantial new gas-fired capacity is being added, and in some regions (e.g., New England and Texas) the number of proposed projects is astronomical. Actual growth will be limited by power price feedbacks as well as gas supply and delivery limitations. While substantial shifts in fuel use from electric industry restructuring alone appear unlikely, numerous "wild cards" affect this conclusion. One is whether restructuring will indeed lead to lower electricity prices. Most important over the long term is the added effect of heightened environmental pressures on coal generation. Impacts on coal and nuclear generation are mixed, with some units becoming more competitive and others retiring, with replacement capacity coming from a variety of sources. Quantitative findings on these interrelationships are summarized and compared with results from other studies.

FUNDAMENTALS OF ELECTRIC DEREGULATION
Yokell, Michael D., Hagler Bailly Consulting, Inc., Boulder, CO
The talk provides an overview of recent regulatory initiatives at the federal and state levels concerning the deregulation of the electric power industry, with emphasis on their effects on the IPP (independent power production) sector. The evolution of contracting for the sale of power, from the passage of the Public Utility Regulatory Policies Act in 1978 through the present, is discussed. Also covered is the relationship between new merchant IPPs and the power exchange (a power spot-market trading center) in places such as California, where deregulation is already in place.